Review for NeurIPS paper: Learning Some Popular Gaussian Graphical Models without Condition Number Bounds
Weaknesses: Attractiveness is a very strong condition. To argue that the paper is a significant contribution to the literature, the authors should elaborate on why this assumption is important from a practical point of view. A good place to start is a recent paper by Agrawal, Raj, Uma Roy, and Caroline Uhler. Another weakness (though this is a growing problem overall) is that all of the interesting mathematics is moved to the appendix, so the paper becomes only a walk-through of the main ideas. I personally do not like this style, as it is hard to get a feeling for the structure of the problem without jumping back and forth between the paper and the appendix.
Review for NeurIPS paper: Learning Some Popular Gaussian Graphical Models without Condition Number Bounds
This paper studies the well-known problem of structure learning for Gaussian Graphical Models: the problem of learning the zero/non-zero pattern of the precision matrix (inverse covariance matrix) of an unknown Gaussian distribution from samples. All known efficient algorithms for this problem suffer a running-time and sample-complexity dependence on the condition number of the unknown covariance matrix. This paper gives an efficient algorithm for covariance matrices satisfying certain structural assumptions (walk-summability, attractiveness) whose guarantees do not depend on the condition number of the unknown covariance, which can be arbitrarily ill-conditioned under these assumptions. The reviewers (with clarifications in the author feedback) were convinced of both the motivation and the non-triviality of the assumptions made, and found the algorithmic contributions of this paper important.
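As a minimal sketch of the recovery task the meta-review describes (not the paper's algorithm): an assumed small chain-structured attractive precision matrix, with all dimensions, thresholds, and the sample size chosen here purely for illustration. With many samples and a well-conditioned model, even naively inverting the empirical covariance and thresholding recovers the zero/non-zero pattern; the hard regime the paper addresses is few samples and ill-conditioning.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
# Chain-graph precision matrix; "attractive" here means nonpositive off-diagonals.
Theta = 2.0 * np.eye(d)
for i in range(d - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = -0.5

Sigma = np.linalg.inv(Theta)
n = 50_000
X = rng.multivariate_normal(np.zeros(d), Sigma, size=n)

# Naive estimator: invert the empirical covariance, then threshold entries.
Theta_hat = np.linalg.inv(X.T @ X / n)
support_hat = np.abs(Theta_hat) > 0.25
support_true = np.abs(Theta) > 0.25
print(np.array_equal(support_hat, support_true))  # expect True
```

With n much smaller than 50,000 (the regime of interest, where n is logarithmic in d), this naive plug-in estimator breaks down, which is why regularized estimators and their conditioning assumptions enter the picture.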
Learning Some Popular Gaussian Graphical Models without Condition Number Bounds
Gaussian Graphical Models (GGMs) have wide-ranging applications in machine learning and the natural and social sciences. In most settings in which they are applied, the number of observed samples is much smaller than the dimension, and the models are assumed to be sparse. While there are a variety of algorithms (e.g. Graphical Lasso, CLIME) that provably recover the graph structure with a logarithmic number of samples, to do so they require various assumptions on the well-conditioning of the precision matrix that are not information-theoretically necessary. Here we give the first fixed polynomial-time algorithms for learning attractive GGMs and walk-summable GGMs with a logarithmic number of samples without any such assumptions.
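For context, a hedged sketch of the Graphical Lasso baseline the abstract mentions, using scikit-learn's GraphicalLasso; the chain-graph precision matrix and the penalty value alpha are illustrative assumptions, not choices from the paper (in practice alpha is tuned, e.g. by cross-validation).

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(1)
d = 5
# Illustrative attractive chain-graph model (off-diagonals of Theta are <= 0).
Theta = 2.0 * np.eye(d)
for i in range(d - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = -0.5

X = rng.multivariate_normal(np.zeros(d), np.linalg.inv(Theta), size=5000)

# l1-penalized maximum-likelihood estimate of the precision matrix.
model = GraphicalLasso(alpha=0.05).fit(X)
support_hat = np.abs(model.precision_) > 0.25
print(support_hat.astype(int))
```

The point of the abstract's contribution is that guarantees for estimators like this one depend on the conditioning of the precision matrix, whereas the proposed algorithms for attractive and walk-summable GGMs do not.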